
    Topographic map visualization from adaptively compressed textures

    Raster-based topographic maps are commonly used in geoinformation systems to overlay geographic entities on top of digital terrain models. Encoding topographic maps with compressed texture formats reduces latency when visualizing large geographic datasets. However, topographic maps combine high-frequency content with large uniform regions, which makes current compressed texture formats ill-suited for encoding them. In this paper we present a method for locally-adaptive compression of topographic maps. Key elements include a Hilbert scan to maximize spatial coherence, efficient encoding of homogeneous image regions through arbitrarily-sized texel runs, a cumulative run-length encoding supporting fast random access, and a compression algorithm supporting both lossless and lossy compression. Our scheme can be easily implemented on current programmable graphics hardware, allowing real-time GPU decompression and rendering of bilinearly-filtered topographic maps.
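    To illustrate the random-access idea, here is a minimal Python sketch (not the paper's actual scheme) of a cumulative run-length encoding: run values are stored next to their cumulative end indices, so looking up any texel in a Hilbert-ordered stream reduces to a binary search.

```python
import bisect

def rle_encode(texels):
    """Run-length encode a 1D texel stream (assumed already ordered
    along a Hilbert curve for spatial coherence). Instead of per-run
    lengths we store cumulative end indices, which enables fast
    random access."""
    values, cum_ends = [], []
    total = 0
    for v in texels:
        total += 1
        if values and values[-1] == v:
            cum_ends[-1] = total          # extend the current run
        else:
            values.append(v)              # start a new run
            cum_ends.append(total)
    return values, cum_ends

def rle_lookup(values, cum_ends, i):
    """O(log n) random access: the texel at index i belongs to the
    first run whose cumulative end exceeds i."""
    return values[bisect.bisect_right(cum_ends, i)]

values, cum_ends = rle_encode([7, 7, 7, 2, 2, 9])
assert rle_lookup(values, cum_ends, 4) == 2
```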

    MeshPipe: a Python-based tool for easy automation and demonstration of geometry processing pipelines

    The popularization of inexpensive 3D scanning, 3D printing, 3D publishing and AR/VR display technologies has renewed the interest in open-source tools providing the geometry processing algorithms required to clean, repair, enrich, optimize and modify point-based and polygon-based models. Nowadays there is a large variety of such open-source tools, whose user community includes not only 3D experts but also 3D enthusiasts and professionals from other disciplines. In this paper we present a Python-based tool that addresses two major shortcomings of current solutions: the lack of easy-to-use methods for creating custom geometry processing pipelines (automation), and the lack of a suitable visual interface for quickly testing, comparing and sharing different pipelines, supporting rapid iterations and providing dynamic feedback to the user (demonstration). From the user's point of view, the tool is a 3D viewer with an integrated Python console from which internal or external Python code can be executed. We provide an easy-to-use but powerful API for element selection and geometry processing. Key algorithms are provided by a high-level C library exposed to the viewer via Python-C bindings. Unlike competing open-source alternatives, our tool has a minimal learning curve and typical pipelines can be written in a few lines of Python code.
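    The abstract does not show MeshPipe's actual API, so the following is only a sketch of the pipeline-as-a-few-lines-of-Python idea; every name in it is an illustrative stand-in, not the tool's documented interface.

```python
# A pipeline-composition sketch in the spirit of the paper; none of
# these names are MeshPipe's real API.
def pipeline(*stages):
    """Compose geometry-processing stages into one reusable callable,
    so a custom pipeline becomes a few lines of Python."""
    def run(mesh):
        for stage in stages:
            mesh = stage(mesh)
        return mesh
    return run

# Illustrative stand-in stages (a real tool would modify the mesh):
def remove_duplicates(mesh):
    return mesh

def fill_holes(mesh):
    return mesh

def decimate(mesh):
    return mesh

clean_and_simplify = pipeline(remove_duplicates, fill_holes, decimate)
result = clean_and_simplify({"vertices": [], "faces": []})
```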

    Single-picture reconstruction and rendering of trees for plausible vegetation synthesis

    State-of-the-art approaches for tree reconstruction either put limiting constraints on the input side (requiring multiple photographs, a scanned point cloud or intensive user input) or provide a representation only suitable for front views of the tree. In this paper we present a complete pipeline for synthesizing and rendering detailed trees from a single photograph with minimal user effort. Since the overall shape and appearance of each tree is recovered from a single photograph of the tree crown, artists can benefit from georeferenced images to populate landscapes with native tree species. A key element of our approach is a compact representation of dense tree crowns through a radial distance map. Our first contribution is an automatic algorithm for generating such representations from a single exemplar image of a tree. We create a rough estimate of the crown shape by solving a thin-plate energy minimization problem, and then add detail through a simplified shape-from-shading approach. The use of seamless texture synthesis results in an image-based representation that can be rendered from arbitrary view directions at different levels of detail. Distant trees benefit from an output-sensitive algorithm inspired on relief mapping. For close-up trees we use a billboard cloud where leaflets are distributed inside the crown shape through a space colonization algorithm. In both cases our representation ensures efficient preservation of the crown shape. Major benefits of our approach include: it recovers the overall shape from a single tree image, involves no tree modeling knowledge and minimal authoring effort, and the associated image-based representation is easy to compress and thus suitable for network streaming.Peer ReviewedPostprint (author's final draft
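    As a rough illustration of the radial-distance-map concept: the paper builds a 3D crown representation, but the 2D analogue below (distance from the crown centroid to the silhouette, per direction) conveys the idea. It assumes a binary silhouette mask as input.

```python
import numpy as np

def radial_distance_map(mask, n_angles=360):
    """2D sketch of a radial distance map: for each direction from the
    crown centroid, store the distance to the farthest silhouette
    pixel. `mask` is a boolean crown-silhouette image."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                 # crown centroid
    ang = np.arctan2(ys - cy, xs - cx)            # direction of each pixel
    rad = np.hypot(ys - cy, xs - cx)              # its distance
    bins = ((ang + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    rmap = np.zeros(n_angles)
    np.maximum.at(rmap, bins, rad)                # farthest pixel per direction
    return (cy, cx), rmap
```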

    Relief mapping on cubic cell complexes

    In this paper we present an algorithm for parameterizing arbitrary surfaces onto a quadrilateral domain defined by a collection of cubic cells. The parameterization inside each cell is implicit and thus requires storing no texture coordinates. Based upon this parameterization, we propose a unified representation of the geometric and appearance information of complex models. The representation consists of a set of cubic cells (providing a coarse representation of the object) together with a collection of distance maps (encoding fine geometric detail inside each cell). Our new representation has uses similar to those of geometry images, but it requires storing a single distance value per texel instead of full vertex coordinates. When combined with color and normal maps, our representation can be used to render an approximation of the model through an output-sensitive relief mapping algorithm, making it especially amenable to GPU ray tracing.
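    A minimal sketch of how such a representation might be sampled, assuming axis-aligned cells and a `dist_map` callable standing in for the decompressed distance-map texture. The sphere-tracing loop here is one plausible reading of distance-map ray marching; the paper's actual traversal and filtering are more involved.

```python
def cell_local_coords(p, origin, size):
    """Implicit parameterization: a point inside an axis-aligned cubic
    cell maps to [0,1]^3 local coordinates, so no per-vertex texture
    coordinates need to be stored."""
    return tuple((p[i] - origin[i]) / size for i in range(3))

def trace_cell(ray_o, ray_d, dist_map, origin, size,
               max_steps=64, eps=1e-3):
    """Sphere-trace one cell: advance the ray by the sampled distance
    until the encoded surface is reached. `dist_map(u, v, w)` is an
    assumed callable returning the distance to the surface in cell
    units at local coordinates (u, v, w)."""
    t = 0.0
    for _ in range(max_steps):
        p = [ray_o[i] + t * ray_d[i] for i in range(3)]
        uvw = cell_local_coords(p, origin, size)
        if not all(0.0 <= c <= 1.0 for c in uvw):
            return None                  # ray left the cell
        d = dist_map(*uvw)
        if d < eps:
            return p                     # surface hit
        t += d * size                    # safe step (sphere tracing)
    return None
```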

    A platform for developing and fine tuning adaptive 3D navigation techniques for the immersive web

    Navigating through a virtual environment is one of the major user tasks on the 3D web. Although hundreds of interaction techniques have been proposed for navigating 3D scenes on desktop, mobile and VR headset systems, 3D navigation still poses a high entry barrier for many potential users. In this paper we discuss the design and implementation of a test platform to facilitate the creation and fine-tuning of interaction techniques for 3D navigation. We support the most common navigation metaphors (walking, flying, teleportation). The key idea is to let developers specify, at runtime, the exact mapping between user actions and virtual camera changes, for any of the supported metaphors. We demonstrate through many examples how this method can be used to adapt the navigation techniques to various users, including persons with no previous 3D navigation skills, elderly people, and people with disabilities.

    This work has been partially funded by the Spanish Ministry of Economy and Competitiveness and FEDER under grant TIN2017-88515-C2-1-R, by EU Horizon 2020, the JPICH Conservation, Protection and Use initiative (JPICH-0127) and the Spanish Agencia Estatal de Investigación, grant PCI2020-111979 Enhancement of Heritage Experiences: the Middle Ages; Digital Layered Models of Architecture and Mural Paintings over Time (EHEM).
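    A minimal sketch of the runtime-swappable mapping idea described above. The platform's actual API is not given in the abstract, so all names here are assumptions.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Camera:
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))
    forward: np.ndarray = field(default_factory=lambda: np.array([0.0, 0.0, -1.0]))

class NavigationMapper:
    """Holds one action-to-camera mapping per metaphor; each mapping
    can be replaced while the application is running."""
    def __init__(self):
        self._mappings = {}

    def set_mapping(self, metaphor, fn):
        self._mappings[metaphor] = fn

    def apply(self, metaphor, stick_y, camera, dt):
        self._mappings[metaphor](stick_y, camera, dt)

# A 'flying' mapping tuned down for users new to 3D navigation.
def gentle_fly(stick_y, cam, dt, max_speed=0.5):
    cam.position += cam.forward * (stick_y * max_speed * dt)

mapper = NavigationMapper()
mapper.set_mapping("flying", gentle_fly)
mapper.apply("flying", stick_y=1.0, camera=Camera(), dt=0.016)
```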

    Designing, testing and adapting navigation techniques for the immersive web

    One of the most essential interactions in Virtual Reality (VR) is the user's ability to move around and explore the virtual environment. The design of the navigation technique plays a crucial role in the user experience, since it determines key usability aspects. VR devices allow for an immersive exploration of 3D worlds, but navigation in VR is challenging for many users due to potential usability issues related to specific VR controllers, user skills, and motion sickness. Although hundreds of interaction techniques have been proposed for this task, VR navigation still poses a high entry barrier for many users. In this paper we argue that adapting the navigation technique to its context of use can lead to substantial improvements in navigation usability and accessibility. The context of use includes the type of scene, the available physical space, and the profile of the user. We present a test platform to facilitate the design and fine-tuning of interaction techniques for 3D navigation. We focus on mainstream VR devices (headsets and controllers) and support the most common navigation metaphors (walking, flying, teleportation). The key idea is to let developers specify, at runtime, the exact mapping between user actions and locomotion changes, for any of the supported metaphors. Such mappings are described by a collection of parameters (e.g. maximum speed) whose values can be adjusted interactively through a GUI, or be provided by user-defined code which can be edited at runtime. Feedback obtained from developers suggests that this approach can be used to quickly adapt the navigation techniques to various people, including persons with no previous 3D navigation skills, elderly people, and people with disabilities, as well as to the type, size and semantics of the virtual environment.

    This work has been funded by MCIN/AEI/10.13039/501100011033/FEDER "A way to make Europe". The Pedret model was partially funded by EU Horizon 2020, the JPICH Conservation, Protection and Use initiative (JPICH-0127) and the Spanish Agencia Estatal de Investigación, grant PCI2020-111979 Enhancement of Heritage Experiences: the Middle Ages; Digital Layered Models of Architecture and Mural Paintings over Time (EHEM).
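    The abstract mentions parameterized mappings adjustable via a GUI or via user code edited at runtime. A sketch of that idea follows; the parameter names besides maximum speed are assumptions, as is the code-override mechanism.

```python
from dataclasses import dataclass

@dataclass
class TeleportParams:
    """Illustrative parameter set; only 'maximum speed' is named in
    the abstract, the other fields are hypothetical."""
    max_speed: float = 2.0       # m/s while aiming
    arc_gravity: float = 9.8     # shape of the teleport arc
    fade_time: float = 0.2       # seconds of fade on jump

def apply_user_override(params, code):
    """Run user-edited code against the current parameters, as a
    runtime console or GUI hook might allow."""
    exec(code, {}, {"params": params})

p = TeleportParams()
apply_user_override(p, "params.max_speed = 0.8  # slower for elderly users")
```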

    A survey of real-time crowd rendering

    In this survey we review, classify and compare existing approaches to real-time crowd rendering. We first give an overview of character animation techniques, as they are tightly coupled to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We further address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
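    One classic runtime LoD-selection criterion mentioned in such surveys is projected screen size. A minimal sketch, with illustrative threshold values not taken from the survey:

```python
import math

def select_lod(distance, char_height, fov_y, screen_h,
               thresholds=(250, 60, 12)):
    """Screen-space LoD selection sketch: estimate the character's
    projected height in pixels and pick a representation accordingly."""
    # At distance d, the vertical frustum spans 2*d*tan(fov/2) world
    # units mapped onto screen_h pixels.
    pixels = char_height * screen_h / (2.0 * distance * math.tan(fov_y / 2.0))
    if pixels > thresholds[0]:
        return "full_mesh"
    if pixels > thresholds[1]:
        return "simplified_mesh"
    if pixels > thresholds[2]:
        return "image_based_impostor"
    return "cull"

print(select_lod(distance=40.0, char_height=1.8,
                 fov_y=math.radians(60), screen_h=1080))
```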

    Friction surfaces: scaled ray-casting manipulation for interacting with 2D GUIs

    Integrating conventional 2D GUIs into Virtual Environments (VEs) can greatly enhance the possibilities of many VE applications. In this paper we present a variation of the well-known ray-casting technique for fast and accurate selection of 2D widgets over a virtual window immersed in a 3D world. The main idea is to provide a new interaction mode where hand rotations are scaled down so that the ray is constrained to intersect the active virtual window. This is accomplished by changing the control-display ratio between the orientation of the user's hand and the ray used for selection. Our technique uses a curved representation of the ray, providing visual feedback on the orientation of both the input device and the selection ray. Users feel that they control a flexible ray that bends as it moves over a virtual friction surface defined by the 2D window. We have implemented this technique and evaluated its effectiveness in terms of accuracy and performance. Our experiments on a four-sided CAVE indicate that the proposed technique can increase the speed and accuracy of component selection in 2D GUIs immersed in 3D worlds.
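    The control-display scaling at the heart of the technique can be sketched in a few lines; the ratio value and angle-based formulation below are illustrative assumptions, not the paper's exact formulation.

```python
def scaled_ray_angles(hand_yaw, hand_pitch, anchor_yaw, anchor_pitch,
                      cd_ratio=0.3):
    """Control-display ratio sketch: hand rotation measured relative to
    the orientation captured when the friction-surface mode was entered
    is damped by cd_ratio < 1, so small hand motions map to fine ray
    motion over the 2D window. The value 0.3 is an arbitrary example."""
    yaw = anchor_yaw + cd_ratio * (hand_yaw - anchor_yaw)
    pitch = anchor_pitch + cd_ratio * (hand_pitch - anchor_pitch)
    return yaw, pitch
```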

    Dynamic worlds in miniature

    The World in Miniature (WIM) metaphor allows users to interact and travel efficiently in virtual environments. In addition to the first-person perspective offered by typical VR applications, the WIM offers a second, dynamic viewpoint through a hand-held miniature copy of the virtual environment. In the original WIM paper the miniature was a scaled-down replica of the whole environment, limiting the technique to simple models manipulated at a single level of scale. Several WIM extensions have been proposed in which the replica shows only a part of the virtual environment. In this paper we present an improved WIM visualization that supports arbitrarily complex, densely occluded scenes. In particular, we discuss algorithms for selecting the region of the virtual environment to be covered by the miniature copy, and efficient algorithms for handling 3D occlusion from an exocentric viewpoint.
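    The simplest form of region selection is a distance cutoff around the user; the sketch below shows only that baseline, while the paper's actual selection and occlusion handling are more elaborate.

```python
import math

def wim_region(user_pos, cells, radius):
    """Baseline miniature-region selection: keep only the scene cells
    whose centers lie within `radius` of the user, so the replica shows
    a local neighbourhood rather than the whole environment.
    `cells` is a list of (center, payload) pairs."""
    return [c for c in cells if math.dist(c[0], user_pos) <= radius]
```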

    Image-based tree variations

    The automatic generation of realistic vegetation closely reproducing the appearance of specific plant species is still a challenging topic in computer graphics. In this paper we present a new approach to generating new tree models from a small collection of frontal RGBA images of trees. The new models are represented either as single billboards (suitable for still-image generation in areas such as architecture rendering) or as billboard clouds (providing parallax effects in interactive applications). Key ingredients of our method include the synthesis of new contours through convex combinations of exemplar contours, the automatic segmentation into crown/trunk classes, and the transfer of RGBA colour from the exemplar images to the synthetic target. We also describe a fully automatic approach to converting a single tree image into a billboard cloud by extracting superpixels and distributing them inside a silhouette-defined 3D volume. Our algorithm allows for the automatic generation of an arbitrary number of tree variations from minimal input, and thus provides a fast solution for adding vegetation variety to outdoor scenes.
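    The convex combination of exemplar contours is straightforward to sketch; the assumption here (made explicit because the abstract does not state it) is that the contours have already been resampled to a common length with a consistent parameterization.

```python
import numpy as np

def blend_contours(contours, weights):
    """New tree silhouette as a convex combination of exemplar
    contours. Each contour is an (n, 2) point array, assumed to be
    resampled to a common length and consistently parameterized."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9, "need convex weights"
    return sum(wi * c for wi, c in zip(w, contours))

# Usage: a new contour halfway between two exemplars.
a = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
b = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0]])
print(blend_contours([a, b], [0.5, 0.5]))
```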